This paper aims to correct a misconception about convolutional neural networks (CNNs). CNNs are made up of convolutional layers which, due to weight sharing, are shift-equivariant. However, contrary to popular belief, convolutional layers are not translation-equivariant, even when boundary effects are ignored and pooling and subsampling are absent. This is because shift equivariance is a discrete symmetry, while translation equivariance is a continuous symmetry. That discrete systems do not in general inherit continuous equivariances is a fundamental limitation of equivariant deep learning. We discuss two implications of this fact. First, CNNs have achieved success in image processing despite not inheriting the translation equivariance of the physical systems they model. Second, using CNNs to solve partial differential equations (PDEs) does not yield translation-equivariant solvers.
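A minimal NumPy sketch of the distinction (our illustration, not code from the paper): a convolutional layer with circular boundaries commutes with integer shifts, but not with a half-sample translation implemented by linear interpolation, because the pointwise ReLU does not commute with interpolation.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(16)   # sampled 1-D signal
w = rng.standard_normal(3)    # convolution kernel

def conv(x, w):
    """Circular 1-D convolution (boundary effects ignored, as in the abstract)."""
    n = len(x)
    return np.array([sum(w[j] * x[(i - j) % n] for j in range(len(w)))
                     for i in range(n)])

def layer(x, w):
    """One convolutional layer: circular conv followed by a pointwise ReLU."""
    return np.maximum(conv(x, w), 0.0)

def translate(x, t):
    """Translate a sampled signal by t samples via linear interpolation."""
    idx = (np.arange(len(x)) - t) % len(x)
    lo = np.floor(idx).astype(int)
    frac = idx - lo
    return (1 - frac) * x[lo] + frac * x[(lo + 1) % len(x)]

# Integer shift (discrete symmetry): the layer is shift-equivariant.
print(np.allclose(layer(translate(x, 1), w), translate(layer(x, w), 1)))   # True

# Half-sample translation (continuous symmetry): equivariance fails,
# because ReLU does not commute with the interpolation operator.
print(np.allclose(layer(translate(x, 0.5), w), translate(layer(x, w), 0.5)))  # False for generic inputs
```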
The task of image segmentation is to classify each pixel in an image with the appropriate label. Various deep learning approaches have been proposed for image segmentation that offer high accuracy and deep architectures. However, these deep learning techniques use a pixel-wise loss function for training, which neglects the neighbor relationships between pixels during learning. The neighboring relationships of pixels are essential information in an image, and utilizing them provides an advantage over using pixel-to-pixel information alone. This study presents regularizers that inject pixel neighbor relationship information into the learning process. The regularizers are constructed via a graph theory approach and a topology approach: in the graph theory approach, the graph Laplacian is used to exploit the smoothness of segmented images based on output images and ground-truth images; in the topology approach, the Euler characteristic is used to identify and minimize the number of isolated objects in segmented images. Experiments show that our scheme successfully captures pixel neighbor relations and improves the performance of the convolutional neural network over a baseline without a regularization term.
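A hedged sketch of the graph-Laplacian regularizer (the weight $\lambda$ and the 4-neighbor grid graph are illustrative assumptions, not the paper's exact construction; for a grid graph, the quadratic form $p^\top L p$ reduces to a sum of squared neighbor differences):

```python
import torch
import torch.nn.functional as F

def laplacian_smoothness(pred):
    """Quadratic form p^T L p of a 4-neighbor grid-graph Laplacian,
    computed as the sum of squared differences between neighboring
    pixels of the soft segmentation map `pred`, shape (B, C, H, W)."""
    dy = pred[:, :, 1:, :] - pred[:, :, :-1, :]   # vertical neighbor edges
    dx = pred[:, :, :, 1:] - pred[:, :, :, :-1]   # horizontal neighbor edges
    return (dy ** 2).sum() + (dx ** 2).sum()

def segmentation_loss(logits, target, lam=1e-4):
    """Pixel-wise cross-entropy plus the neighbor-smoothness regularizer.
    `lam` is an illustrative weight, not a value from the paper."""
    pred = torch.softmax(logits, dim=1)
    return F.cross_entropy(logits, target) + lam * laplacian_smoothness(pred)
```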
Deep learning techniques have been used to increase the performance of single image super-resolution (SISR). However, most existing CNN-based SISR approaches primarily focus on building deeper or larger networks to extract more significant high-level features. Usually, a pixel-level loss between the target high-resolution image and the estimated image is used, while the neighbor relations between pixels in the image are seldom exploited. Yet, according to observations, a pixel's neighbor relationships contain rich information about spatial structure, local context, and structural knowledge. Based on this fact, in this paper we exploit pixel neighbor relationships from a different perspective: we propose the differences of neighboring pixels to regularize the CNN, constructing a graph from the estimated image and the ground-truth image. The proposed method outperforms state-of-the-art methods in quantitative and qualitative evaluations on benchmark datasets. Keywords: Super-resolution, Convolutional Neural Networks, Deep Learning
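One plausible reading of the neighbor-difference regularizer, sketched below (the L1 distances and the weight `lam` are illustrative assumptions; the paper builds its term from a graph over the estimated and ground-truth images, which this does not reproduce exactly):

```python
import torch

def neighbor_differences(img):
    """Differences between each pixel and its right/down neighbors,
    for an image tensor of shape (B, C, H, W)."""
    return (img[:, :, :, 1:] - img[:, :, :, :-1],
            img[:, :, 1:, :] - img[:, :, :-1, :])

def neighbor_regularizer(sr, hr):
    """Distance between the neighbor-difference maps of the
    super-resolved image `sr` and the ground truth `hr`."""
    sr_dx, sr_dy = neighbor_differences(sr)
    hr_dx, hr_dy = neighbor_differences(hr)
    return (sr_dx - hr_dx).abs().mean() + (sr_dy - hr_dy).abs().mean()

def total_loss(sr, hr, lam=0.1):
    """Pixel-level L1 loss plus the neighbor-difference regularizer;
    `lam` is an illustrative weight."""
    return (sr - hr).abs().mean() + lam * neighbor_regularizer(sr, hr)
```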
Unsupervised object discovery aims to localize objects in images while removing the dependence on the annotations required by most deep learning-based methods. To address this problem, we propose a fully unsupervised, bottom-up approach for multiple-object discovery. The proposed approach is a two-stage framework. First, instances of object parts are segmented using the intra-image similarity between self-supervised local features. The second step merges and filters the object parts to form complete object instances; this is performed by two CNN models that capture semantic information on objects from the entire dataset. We demonstrate that the pseudo-labels generated by our method provide a better precision-recall trade-off than existing single- and multiple-object discovery methods. In particular, we provide state-of-the-art results for both unsupervised class-agnostic object detection and unsupervised image segmentation.
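A hedged sketch of what the first stage could look like with ViT-style patch features (the feature extractor, seed selection, and threshold `tau` are illustrative assumptions, not the paper's procedure):

```python
import torch

def part_mask_from_features(feats, seed_idx, tau=0.3):
    """Group patches whose self-supervised features are similar to a
    seed patch.  `feats` has shape (N, D), one row per image patch
    (e.g., tokens from a DINO-style encoder); returns a boolean mask
    over patches marking one candidate object part."""
    feats = torch.nn.functional.normalize(feats, dim=1)
    sim = feats @ feats[seed_idx]        # cosine similarity to the seed patch
    return sim > tau                     # intra-image similarity thresholding
```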
Adequate strategizing of agents' behaviors is essential to solving cooperative MARL problems. One intuitively beneficial yet uncommon method in this domain is predicting agents' future behaviors and planning accordingly. Leveraging this point, we propose a two-level hierarchical architecture that combines a novel information-theoretic objective with a trajectory prediction model to learn a strategy. To this end, we introduce a latent policy that learns two types of latent strategies: an individual strategy $z_A$ and a relational strategy $z_R$, using a modified Graph Attention Network module to extract interaction features. We encourage each agent to behave according to the strategy by conditioning its local $Q$ function on $z_A$, and we further equip agents with a shared $Q$ function that conditions on $z_R$. Additionally, we introduce two regularizers that keep predicted trajectories accurate and rewarding. Empirical results on Google Research Football (GRF) and StarCraft II (SC II) micromanagement tasks show that our method establishes a new state of the art: to the best of our knowledge, it is the first MARL algorithm to solve all super-hard SC II scenarios as well as the GRF full game with a win rate higher than $95\%$, thus outperforming all existing methods. Videos and a brief overview of the methods and results are available at: https://sites.google.com/view/hier-strats-marl/home.
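A hedged sketch of how a local $Q$ function can condition on the individual latent strategy $z_A$, via simple concatenation (sizes and architecture are illustrative, not the paper's):

```python
import torch
import torch.nn as nn

class LocalQ(nn.Module):
    """Per-agent utility Q(obs, a | z_A): the individual latent strategy
    z_A is concatenated to the agent's observation before the MLP.
    All dimensions below are illustrative placeholders."""
    def __init__(self, obs_dim, n_actions, z_dim, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(obs_dim + z_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, n_actions))

    def forward(self, obs, z_a):
        # Returns one Q-value per action, conditioned on the strategy.
        return self.net(torch.cat([obs, z_a], dim=-1))
```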
Survival analysis is the branch of statistics that studies the relation between the characteristics of living entities and their respective survival times, taking into account the partial information held by censored cases. A good analysis can, for example, determine whether one medical treatment for a group of patients is better than another. With the rise of machine learning, survival analysis can be modeled as learning a function that maps studied patients to their survival times. For that to succeed, three crucial issues must be tackled. First, some patient data is censored: we do not know the true survival times of all patients. Second, data is scarce, which has led past research to treat different illness types as domains in a multi-task setup. Third, there is the need to adapt to new or extremely rare illness types for which little or no labeled data is available. In contrast to previous multi-task setups, we investigate how to efficiently adapt to a new survival target domain from multiple survival source domains. For this, we introduce a new survival metric and a corresponding discrepancy measure between survival distributions. These allow us to define domain adaptation for survival analysis while incorporating censored data, which would otherwise have to be dropped. Our experiments on two cancer data sets reveal superb performance on target domains, better treatment recommendations, and a weight matrix with a plausible explanation.
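For background, a standard example of a survival metric that incorporates censored cases is Harrell's concordance index, sketched below (the paper introduces its own metric and discrepancy measure, which this does not reproduce; tied event times are ignored for brevity):

```python
import numpy as np

def concordance_index(time, event, risk):
    """Harrell's C-index: a pair (i, j) is comparable only if the
    earlier observed time belongs to an uncensored subject (event=1),
    which is how censored cases contribute without being dropped.
    `risk` is a predicted risk score (higher = shorter expected survival)."""
    n_conc, n_comp = 0.0, 0.0
    n = len(time)
    for i in range(n):
        for j in range(n):
            if time[i] < time[j] and event[i] == 1:   # comparable pair
                n_comp += 1
                if risk[i] > risk[j]:
                    n_conc += 1.0
                elif risk[i] == risk[j]:
                    n_conc += 0.5
    return n_conc / n_comp

# Toy usage: 4 patients, the second censored (event=0).
print(concordance_index(np.array([2.0, 3.0, 5.0, 8.0]),
                        np.array([1, 0, 1, 1]),
                        np.array([0.9, 0.7, 0.4, 0.1])))
```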
Large language models (LLMs) have been shown to be able to perform new tasks based on a few demonstrations or natural language instructions. While these capabilities have led to widespread adoption, most LLMs are developed by resource-rich organizations and are frequently kept from the public. As a step towards democratizing this powerful technology, we present BLOOM, a 176B-parameter open-access language model designed and built thanks to a collaboration of hundreds of researchers. BLOOM is a decoder-only Transformer language model that was trained on the ROOTS corpus, a dataset comprising hundreds of sources in 46 natural and 13 programming languages (59 in total). We find that BLOOM achieves competitive performance on a wide variety of benchmarks, with stronger results after undergoing multitask prompted finetuning. To facilitate future research and applications using LLMs, we publicly release our models and code under the Responsible AI License.
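Since the models are openly released, they can be loaded through the Hugging Face transformers library. A minimal usage sketch (the 560M variant is substituted for the 176B checkpoint "bigscience/bloom" purely so the snippet runs on a single machine):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# "bigscience/bloom" is the full 176B model; the small variant below
# is used only to keep this example runnable on commodity hardware.
name = "bigscience/bloom-560m"
tokenizer = AutoTokenizer.from_pretrained(name)
model = AutoModelForCausalLM.from_pretrained(name)

inputs = tokenizer("BLOOM is an open-access language model that", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=20)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```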
To analyze microvessels, a characteristic of plaque vulnerability, we developed an automated deep learning method for detecting microvessels in intravascular optical coherence tomography (IVOCT) images. A total of 8,403 IVOCT image frames from 85 lesions and 37 normal segments were analyzed. Manual annotation was done using dedicated software (OCTOPUS) previously developed by our group. Data augmentation in the polar (r, θ) domain was applied to raw IVOCT images to ensure that microvessels appear at all possible angles. Pre-processing methods included guidewire/shadow detection, lumen segmentation, pixel shifting, and noise reduction. DeepLab v3+ was used to segment microvessel candidates. A bounding box on each candidate was classified as either microvessel or non-microvessel using a shallow convolutional neural network. For better classification, we used data augmentation (i.e., angle rotation) on bounding boxes containing a microvessel during network training. Data augmentation and pre-processing steps improved microvessel segmentation performance significantly, yielding a method with a Dice coefficient of 0.71+/-0.10 and pixel-wise sensitivity/specificity of 87.7+/-6.6%/99.8+/-0.1%. The network for classifying microvessels from candidates performed exceptionally well, with sensitivity of 99.5+/-0.3%, specificity of 98.8+/-1.0%, and accuracy of 99.1+/-0.5%. The classification step eliminated the majority of residual false positives, and the Dice coefficient increased from 0.71 to 0.73. In addition, our method produced 698 image frames with microvessels present, compared to 730 from manual analysis, representing a 4.4% difference. When compared to the manual method, the automated method improved microvessel continuity, implying improved segmentation performance. The method will be useful for research purposes as well as potential future treatment planning.
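The angle-rotation augmentation has a particularly simple form in the polar domain: rotating the vessel about the catheter corresponds to a circular shift along the θ axis. A hedged sketch (the array shape and number of augmented copies are illustrative assumptions):

```python
import numpy as np

def rotate_polar(frame, shift_px):
    """Angle-rotation augmentation for an IVOCT frame stored in the
    polar (r, theta) domain: rotation is a circular shift along the
    theta axis.  `frame` has shape (n_r, n_theta)."""
    return np.roll(frame, shift_px, axis=1)

def augment(frame, n_copies=8):
    """Generate rotated copies so features appear at many angles."""
    n_theta = frame.shape[1]
    shifts = np.linspace(0, n_theta, n_copies, endpoint=False).astype(int)
    return [rotate_polar(frame, s) for s in shifts]
```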
Most artificial intelligence (AI) research has concentrated on high-income countries, where imaging data, IT infrastructure, and clinical expertise are plentiful. However, progress has been slower in limited-resource settings where medical imaging is needed. For example, in sub-Saharan Africa, the rate of perinatal mortality is very high due to limited access to antenatal screening. In these countries, AI models could be implemented to help clinicians acquire fetal ultrasound planes for the diagnosis of fetal abnormalities. So far, deep learning models have been proposed to identify standard fetal planes, but there is no evidence of their ability to generalize to centers with limited access to high-end ultrasound equipment and data. This work investigates different strategies to reduce the domain-shift effect for a fetal plane classification model trained at a high-resource clinical center and transferred to new low-resource centers. To this end, the model was first evaluated on a new center in Denmark with 1,008 patients, and later on five African centers (Egypt, Algeria, Uganda, Ghana, and Malawi) with 25 patients each. The results show that a transfer learning approach can be a solution for combining a small African sample with an existing large-scale database from a developed country. In particular, recall improved to $0.92 \pm 0.04$ while high precision was maintained. This framework shows promise for building generalizable new AI models in clinical centers with limited data acquired under challenging and heterogeneous conditions, and calls for further research to develop new solutions for the usability of AI in lower-resource countries.
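One common form the transfer-learning strategy could take (a hedged sketch; the backbone, the number of planes, and the choice to freeze the feature extractor are illustrative assumptions, not details from the paper):

```python
import torch
import torch.nn as nn
from torchvision import models

# Start from a pretrained feature extractor standing in for the
# classifier trained at the high-resource center, freeze it, and
# fine-tune only the classification head on the small target sample.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)
for p in model.parameters():
    p.requires_grad = False

n_planes = 6  # illustrative number of standard fetal plane classes
model.fc = nn.Linear(model.fc.in_features, n_planes)  # trainable head

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-4)
```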
Neural architecture search (NAS) has become a common approach for developing and discovering new neural architectures for different target platforms and purposes. However, scanning the search space entails long training processes for many candidate architectures, which is costly in terms of computational resources and time. Regression algorithms are a common tool for predicting a candidate architecture's accuracy and can greatly accelerate the search process. We aim to propose a new baseline that will support the development of regression algorithms that can predict an architecture's accuracy from its scheme alone, or by training it for only a minimal number of epochs. We therefore introduce the NAAP-440 dataset of 440 neural architectures, trained on CIFAR10 using a fixed recipe. Our experiments show that by using off-the-shelf regression algorithms and running at most 10% of the training process, not only can an architecture's accuracy be predicted accurately, but the predicted values also preserve the accuracy ordering of the architectures with a minimal number of monotonicity violations. This approach may serve as a powerful tool for accelerating NAS-based studies, thereby dramatically increasing their efficiency. The dataset and code used in the study have been made public.
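A hedged sketch of the benchmark's intended use, on synthetic stand-in data (the feature columns, regressor choice, and train/test split are illustrative assumptions, not the paper's protocol):

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)
# Synthetic stand-ins for NAAP-440 columns: per-architecture scheme
# descriptors plus accuracies from the first few epochs (<= 10% of training).
X_scheme = rng.random((440, 8))
acc_early = rng.random((440, 3))
acc_final = acc_early.mean(axis=1) + 0.05 * rng.random(440)

X = np.hstack([X_scheme, acc_early])
reg = RandomForestRegressor(n_estimators=200, random_state=0)
reg.fit(X[:350], acc_final[:350])
pred, true = reg.predict(X[350:]), acc_final[350:]

# Monotonicity violations: held-out pairs whose predicted order
# disagrees with the true accuracy order.
violations = sum((pred[i] - pred[j]) * (true[i] - true[j]) < 0
                 for i in range(len(true)) for j in range(i + 1, len(true)))
print(violations, "violations out of", len(true) * (len(true) - 1) // 2, "pairs")
```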